1.
AJPM Focus ; 2(3): 100101, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37790674

ABSTRACT

Introduction: Healthcare systems such as Kaiser Permanente are increasingly focusing on patients' social health. However, there is limited evidence to guide social health integration strategy. The purpose of this study was to identify social health research opportunities using a stakeholder-driven process. Methods: A modified Concept Mapping approach was implemented from June 2021 to February 2022. Stakeholders (n=746) received the prompt, "One thing I wish we knew more about to advance my work addressing social health..." An inductive content analysis approach was used to assign topics and synthesize and refine research-focused statements into research questions. Questions were then rated on impact and priority by researcher stakeholders (n=16). Mean impact and priority scores and an overall combined score were calculated. Question rankings were generated using the combined score. Results: Brainstorming produced 148 research-focused statements. A final list of 59 research questions was generated for rating. Question topics were (1) Data, Measures, and Metrics; (2) Intervention Approach and Impact; (3) Technology; (4) Role of Healthcare Systems; (5) Community-Based Organizations; (6) Equity; (7) Funding; and (8) Social Health Integration. On a scale from 1 (low) to 10 (high), the mean impact score was 6.12 (range=4.14-7.79), and the mean priority score was 5.61 (range=3.07-8.64). Twenty-four statements were rated as both high impact (>6.12) and high priority (>5.61). Conclusions: The broad range of topics with high impact and priority scores reveals how nascent the evidence base is, with fundamental research on the nature of social risk and health system involvement still needed.
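To make the rating arithmetic in this abstract concrete, the short Python sketch below computes mean impact and priority scores, an overall combined score, and the high-impact/high-priority flags for a set of research questions. It is a minimal sketch under stated assumptions: the combination rule (a simple average of the two means) and the example questions and ratings are illustrative guesses, since the abstract does not specify the formula or the data.

from statistics import mean

# Hypothetical stakeholder ratings (1 = low, 10 = high); questions and numbers are illustrative.
ratings = {
    "How should social risk data be standardized?": {"impact": [7, 8, 6, 9], "priority": [6, 7, 8, 7]},
    "What roles can community-based organizations play?": {"impact": [5, 4, 6, 5], "priority": [4, 3, 5, 4]},
}

MEAN_IMPACT, MEAN_PRIORITY = 6.12, 5.61  # study-wide means reported in the abstract above

scored = []
for question, r in ratings.items():
    impact, priority = mean(r["impact"]), mean(r["priority"])
    combined = (impact + priority) / 2  # assumed combination rule
    scored.append((combined, impact, priority, question))

# Rank questions by combined score, highest first, and flag high impact/priority.
for combined, impact, priority, question in sorted(scored, key=lambda s: s[0], reverse=True):
    flag = impact > MEAN_IMPACT and priority > MEAN_PRIORITY
    print(f"{combined:.2f}  impact={impact:.2f}  priority={priority:.2f}  high-impact-and-priority={flag}  {question}")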

2.
Implement Sci ; 18(1): 3, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36726127

ABSTRACT

BACKGROUND: Experts recommend that treatment for substance use disorder (SUD) be integrated into primary care. The Digital Therapeutics for Opioids and Other SUD (DIGITS) Trial tests strategies for implementing reSET® and reSET-O®, which are prescription digital therapeutics for SUD and opioid use disorder, respectively, that include the community reinforcement approach, contingency management, and fluency training to reinforce concept mastery. The purpose of this trial is to test whether two implementation strategies improve implementation success (Aim 1) and achieve better population-level cost-effectiveness (Aim 2) over a standard implementation approach. METHODS/DESIGN: The DIGITS Trial is a hybrid type III cluster-randomized trial. It examines outcomes of implementation strategies, rather than studying clinical outcomes of a digital therapeutic. It includes 22 primary care clinics from a healthcare system in Washington State and patients with unhealthy substance use who visit clinics during an active implementation period (up to one year). Primary care clinics implemented reSET and reSET-O using a multifaceted implementation strategy previously used by clinical leaders to roll out smartphone apps ("standard implementation," including discrete strategies such as clinician training and electronic health record tools). Clinics were randomized as 21 sites in a 2x2 factorial design to receive up to two added implementation strategies: (1) practice facilitation, and/or (2) health coaching. Outcome data are derived from electronic health records and logs of digital therapeutic usage. Aim 1's primary outcomes include reach of the digital therapeutics to patients and fidelity of patients' use of the digital therapeutics to clinical recommendations. Substance use and engagement in SUD care are additional outcomes. In Aim 2, a population-level cost-effectiveness analysis will inform the economic benefit of the implementation strategies compared to standard implementation. Implementation is monitored using formative evaluation, and sustainment will be studied for up to one year using qualitative and quantitative research methods. DISCUSSION: The DIGITS Trial uses an experimental design to test whether implementation strategies increase and improve the delivery of digital therapeutics for SUDs when embedded in a large healthcare system. It will provide data on the potential benefits and cost-effectiveness of alternative implementation strategies. ClinicalTrials.gov Identifier: NCT05160233 (Submitted 12/3/2021). https://clinicaltrials.gov/ct2/show/NCT05160233.
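As a concrete illustration of the 2x2 factorial cluster randomization described in this trial abstract, the Python sketch below allocates 21 hypothetical clinic sites across the four combinations of practice facilitation and health coaching. It is a minimal sketch under stated assumptions (simple shuffled allocation, made-up site names, a fixed seed for reproducibility) and is not the DIGITS Trial's actual randomization procedure.

import random

sites = [f"clinic_{i:02d}" for i in range(1, 22)]  # 21 randomization sites (hypothetical names)
arms = [
    {"practice_facilitation": pf, "health_coaching": hc}
    for pf in (False, True)
    for hc in (False, True)
]

rng = random.Random(2021)  # fixed seed so the allocation is reproducible
rng.shuffle(sites)

# Cycle shuffled sites through the four cells for a near-balanced allocation.
allocation = {site: arms[i % len(arms)] for i, site in enumerate(sites)}
for site, arm in sorted(allocation.items()):
    print(site, arm)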


Subject(s)
Delivery of Health Care , Opioid-Related Disorders , Humans , Behavior Therapy , Analgesics, Opioid , Opioid-Related Disorders/drug therapy , Primary Health Care , Randomized Controlled Trials as Topic
3.
J Am Geriatr Soc ; 69(8): 2335-2343, 2021 08.
Article in English | MEDLINE | ID: mdl-33721340

ABSTRACT

BACKGROUND: More than three million Americans turn 65 each year and newly enroll in Medicare, making this one of the most common insurance transitions. Non-Medicare insurance transitions are associated with changes in health, healthcare utilization, and costs. In addition, older Americans have higher morbidity, mortality, healthcare utilization, and healthcare costs than the general population. However, the effect of new Medicare enrollment on these outcomes is unclear. DESIGN: We conducted a scoping review to rigorously identify the scope of evidence on the association between new Medicare enrollment and health, healthcare utilization, and costs. SETTING: We included English-language, peer-reviewed studies cataloged in Medline (PubMed) and EconLit from 1998 to 2018. PARTICIPANTS: Individuals newly enrolling in Medicare. MEASUREMENTS: We measured health (e.g., self-reported health), healthcare utilization (e.g., provider visits, preventive care, and hospitalizations), and costs (e.g., patient out-of-pocket and health plan spending). RESULTS: We screened 5265 articles and included 20 articles. New Medicare enrollment was found to increase self-reported health and healthcare utilization overall, as well as reduce disparities across racial and socioeconomic strata. Provider visits, preventive care, and hospitalizations all increased. However, patient out-of-pocket spending decreased, and health plan spending also decreased when Medicare's lower prices were accounted for. Few studies compared outcomes among new Medicare Advantage enrollees with new Medicare fee-for-service enrollees. None of the studies specifically evaluated the effect of new Medicare enrollment on adults with multiple chronic conditions. CONCLUSION: New Medicare enrollment improves access overall and reduces access disparities. However, the impact of new Medicare enrollment among subgroups defined by insurance coverage type and number of chronic conditions is less clear. Future work should also evaluate the mechanism for increases in hospitalizations.


Subject(s)
Health Care Costs/statistics & numerical data , Medicare/statistics & numerical data , Patient Acceptance of Health Care/statistics & numerical data , Aged , Ambulatory Care/statistics & numerical data , Hospitalization/statistics & numerical data , Humans , United States
4.
Am J Health Promot ; 35(3): 421-429, 2021 03.
Article in English | MEDLINE | ID: mdl-33504161

ABSTRACT

PURPOSE: To explore financial incentives as an intervention to improve colorectal cancer screening (CRCS) adherence among traditionally disadvantaged patients who have never been screened or are overdue for screening. APPROACH: We used qualitative methods to describe patients' attitudes toward the offer of incentives, plans for future screening, and additional barriers and facilitators to CRCS. SETTING: Kaiser Permanente Washington (KPWA). PARTICIPANTS: KPWA patients who were due or overdue for CRCS. METHOD: We conducted semi-structured qualitative interviews with 37 patients who were randomized to 1 of 2 incentives (guaranteed $10 or a lottery for $50) to complete CRCS. Interview transcripts were analyzed using a qualitative content analysis approach. RESULTS: Patients generally had positive attitudes toward both types of incentives; however, half did not recall the incentive offer at the time of the interview. Among those who recalled the offer, 95% were screened, compared to only 25% among those who did not remember the offer. Most screeners stated that staying healthy was their primary motivator for screening, but many suggested that the incentive helped them prioritize and complete screening. CONCLUSIONS: Incentives to complete CRCS may help motivate patients who would like to screen but have previously procrastinated. Future studies should ensure that the incentive offer is noticeable and shorten the deadline for completion of FIT screening.


Subject(s)
Colorectal Neoplasms , Motivation , Colorectal Neoplasms/diagnosis , Early Detection of Cancer , Humans , Mass Screening , Washington
5.
Implement Res Pract ; 2: 26334895211002474, 2021.
Article in English | MEDLINE | ID: mdl-37089997

ABSTRACT

Background: Measurement is a critical component for any field. Systematic reviews are a way to locate measures and uncover gaps in current measurement practices. The present study identified measures used in behavioral health settings that assessed all constructs within the Process domain and two constructs from the Inner setting domain as defined by the Consolidated Framework for Implementation Research (CFIR). While previous conceptual work has established the importance that social networks and key stakeholders play throughout the implementation process, measurement studies have not focused on investigating the quality of how these activities are being carried out. Methods: The review occurred in three phases. In Phase I, data collection included (1) search string generation, (2) title and abstract screening, (3) full-text review, (4) mapping to CFIR constructs, and (5) "cited-by" searches. In Phase II, data extraction consisted of coding information relevant to the nine psychometric properties included in the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). In Phase III, data analysis was completed. Results: Measures were identified in only seven constructs: Structural characteristics (n = 13), Networks and communication (n = 29), Engaging (n = 1), Opinion leaders (n = 5), Champions (n = 5), Planning (n = 5), and Reflecting and evaluating (n = 5). No quantitative assessment measures of Formally appointed implementation leaders, External change agents, or Executing were identified. Internal consistency and norms were reported on most often, whereas no studies reported on discriminant validity or responsiveness. Not one measure in the sample reported all nine psychometric properties evaluated by the PAPERS. Scores in the identified sample of measures ranged from -2 to 10 out of a possible 36. Conclusions: Overall, measures demonstrated minimal to adequate evidence, and available psychometric information was limited. The majority were study-specific, limiting their generalizability. Future work should focus on more rigorous measure development and testing of currently existing measures, while moving away from creating new, single-use measures. Plain Language Summary: How we measure the processes and players involved in implementing evidence-based interventions is crucial to understanding what factors are helping or hurting the intervention's use in practice and how to take the intervention to scale. Unfortunately, measures of these factors (stakeholders, their networks and communication, and their implementation activities) have received little attention. This study sought to identify and evaluate the quality of these types of measures. Our review focused on collecting measures used for identifying influential staff members, known as opinion leaders and champions, and investigating how they plan, execute, engage, and evaluate the hard work of implementation. Upon identifying these measures, we collected all published information about their uses to evaluate the quality of their evidence with respect to their ability to produce consistent results across items within each use (i.e., reliability) and whether they assess what they are intending to measure (i.e., validity). Our searches located over 40 measures deployed in behavioral health settings for evaluation. We observed a dearth of evidence for reliability and validity, and when evidence existed, the quality was low. These findings tell us that more measurement work is needed to better understand how to optimize players and processes for the purposes of successful implementation.

6.
Implement Res Pract ; 2: 26334895211018862, 2021.
Article in English | MEDLINE | ID: mdl-37090009

ABSTRACT

Background: Organizational culture, organizational climate, and implementation climate are key organizational constructs that influence the implementation of evidence-based practices. However, there has been little systematic investigation of the availability of psychometrically strong measures that can be used to assess these constructs in behavioral health. This systematic review identified and assessed the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs as defined by the Consolidated Framework for Implementation Research (CFIR) and Ehrhart and colleagues. Methods: Data collection involved search string generation, title and abstract screening, full-text review, construct assignment, and citation searches for all known empirical uses. Data relevant to nine psychometric criteria from the Psychometric and Pragmatic Evidence Rating Scale (PAPERS) were extracted: internal consistency, convergent validity, discriminant validity, known-groups validity, predictive validity, concurrent validity, structural validity, responsiveness, and norms. Extracted data for each criterion were rated on a scale from -1 ("poor") to 4 ("excellent"), and each measure was assigned a total score (highest possible score = 36) that formed the basis for head-to-head comparisons of measures for each focal construct. Results: We identified full measures or relevant subscales of broader measures for organizational culture (n = 21), organizational climate (n = 36), implementation climate (n = 2), tension for change (n = 2), compatibility (n = 6), relative priority (n = 2), organizational incentives and rewards (n = 3), goals and feedback (n = 3), and learning climate (n = 2). Psychometric evidence was most frequently available for internal consistency and norms. Information about other psychometric properties was less available. Median ratings for psychometric properties across categories of measures ranged from "poor" to "good." There was limited evidence of responsiveness or predictive validity. Conclusion: While several promising measures were identified, the overall state of measurement related to these constructs is poor. To enhance understanding of how these constructs influence implementation research and practice, measures that are sensitive to change and predictive of key implementation and clinical outcomes are required. There is a need for further testing of the most promising measures, and ample opportunity to develop additional psychometrically strong measures of these important constructs. Plain Language Summary: Organizational culture, organizational climate, and implementation climate can play a critical role in facilitating or impeding the successful implementation and sustainment of evidence-based practices. Advancing our understanding of how these contextual factors independently or collectively influence implementation and clinical outcomes requires measures that are reliable and valid. Previous systematic reviews identified measures of organizational factors that influence implementation, but none focused explicitly on behavioral health; focused solely on organizational culture, organizational climate, and implementation climate; or assessed the evidence base of all known uses of a measure within a given area, such as behavioral health-focused implementation efforts. 
The purpose of this study was to identify and assess the psychometric properties of measures of organizational culture, organizational climate, implementation climate, and related subconstructs that have been used in behavioral health-focused implementation research. We identified 21 measures of organizational culture, 36 measures of organizational climate, 2 measures of implementation climate, 2 measures of tension for change, 6 measures of compatibility, 2 measures of relative priority, 3 measures of organizational incentives and rewards, 3 measures of goals and feedback, and 2 measures of learning climate. Some promising measures were identified; however, the overall state of measurement across these constructs is poor. This review highlights specific areas for improvement and suggests the need to rigorously evaluate existing measures and develop new measures.
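The PAPERS scoring scheme described in this abstract (nine psychometric criteria, each rated from -1 "poor" to 4 "excellent" and summed to a total out of 36) can be illustrated with a short Python sketch. Treating criteria with no available evidence as contributing 0, and the example ratings themselves, are assumptions made here for illustration; the published PAPERS guidance is the authoritative source for the scoring rules.

PAPERS_CRITERIA = [
    "internal_consistency", "convergent_validity", "discriminant_validity",
    "known_groups_validity", "predictive_validity", "concurrent_validity",
    "structural_validity", "responsiveness", "norms",
]

def papers_total(ratings):
    """Sum nine criterion ratings (-1 = poor ... 4 = excellent) into a total out of 36."""
    total = 0
    for criterion in PAPERS_CRITERIA:
        score = ratings.get(criterion, 0)  # assumption: a criterion with no evidence contributes 0
        if not -1 <= score <= 4:
            raise ValueError(f"rating out of range: {criterion}={score}")
        total += score
    return total

# Hypothetical measure with evidence reported for only three criteria.
example = {"internal_consistency": 3, "norms": 2, "convergent_validity": 1}
print(papers_total(example), "out of", 4 * len(PAPERS_CRITERIA))  # 6 out of 36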

7.
Transl Behav Med ; 11(1): 11-20, 2021 02 11.
Article in English | MEDLINE | ID: mdl-31747021

ABSTRACT

The use of reliable, valid measures in implementation practice will remain limited without pragmatic measures. Previous research identified the need for pragmatic measures, though prior efforts to identify their characteristics relied only on expert opinion and literature review. Our team completed four studies to develop stakeholder-driven pragmatic rating criteria for implementation measures. We published Studies 1 (identifying dimensions of the pragmatic construct) and 2 (clarifying the internal structure), which engaged stakeholders (participants in mental health provider and implementation settings) to identify 17 terms/phrases across four categories: Useful, Compatible, Acceptable, and Easy. This paper presents Studies 3 and 4: a Delphi to ascertain stakeholder-prioritized dimensions within a mental health context, and a pilot study applying the rating criteria. Stakeholders (N = 26) participated in a Delphi and rated the relevance of the 17 terms/phrases to the pragmatic construct. The investigator team further defined and shortened the list, which was then piloted with 60 implementation measures. The Delphi confirmed the importance of all pragmatic criteria but provided little guidance on relative importance. The investigators removed or combined terms/phrases to obtain 11 criteria. The 6-point rating system assigned to each criterion demonstrated sufficient variability across items. The grey literature did not add critical information. This work produced the first stakeholder-driven rating criteria for assessing whether measures are pragmatic. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) combines the pragmatic criteria with psychometric rating criteria from previous work. Use of PAPERS can inform the development of implementation measures and the assessment of the quality of existing measures.


Subject(s)
Psychometrics , Humans , Pilot Projects , Reproducibility of Results
8.
Implement Sci ; 15(1): 47, 2020 06 19.
Article in English | MEDLINE | ID: mdl-32560661

ABSTRACT

BACKGROUND: Public policy has tremendous impacts on population health. While policy development has been extensively studied, policy implementation research is newer and relies largely on qualitative methods. Quantitative measures are needed to disentangle the differential impacts of policy implementation determinants (i.e., barriers and facilitators) and outcomes to ensure intended benefits are realized. Implementation outcomes include acceptability, adoption, appropriateness, compliance/fidelity, feasibility, penetration, sustainability, and costs. This systematic review identified quantitative measures that are used to assess health policy implementation determinants and outcomes and evaluated the quality of these measures. METHODS: Three frameworks guided the review: the Implementation Outcomes Framework (Proctor et al.), the Consolidated Framework for Implementation Research (Damschroder et al.), and the Policy Implementation Determinants Framework (Bullock et al.). Six databases were searched: Medline, CINAHL Plus, PsycInfo, PAIS, ERIC, and Worldwide Political. Searches were limited to English-language, peer-reviewed journal articles published January 1995 to April 2019. Search terms addressed four levels: health, public policy, implementation, and measurement. Empirical studies of public policies addressing physical or behavioral health with quantitative self-report or archival measures of policy implementation with at least two items assessing implementation outcomes or determinants were included. Consensus scoring of the Psychometric and Pragmatic Evidence Rating Scale assessed the quality of measures. RESULTS: Database searches yielded 8417 non-duplicate studies, with 870 (10.3%) undergoing full-text screening, yielding 66 studies. From the included studies, 70 unique measures were identified to quantitatively assess implementation outcomes and/or determinants. Acceptability, feasibility, appropriateness, and compliance were the most commonly measured implementation outcomes. Common determinants in the identified measures were organizational culture, implementation climate, and readiness for implementation, each an aspect of the internal setting. Pragmatic quality ranged from adequate to good, with most measures freely available, brief, and at a high school reading level. Few psychometric properties were reported. CONCLUSIONS: Well-tested quantitative measures of the internal implementation setting were underutilized in policy studies. Further development and testing of external context measures are warranted. This review is intended to stimulate measure development and high-quality assessment of health policy implementation outcomes and determinants to help practitioners and researchers spread evidence-informed policies to improve population health. REGISTRATION: Not registered.
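The four-level search strategy described in this abstract (health AND public policy AND implementation AND measurement) lends itself to a small worked example. The Python sketch below assembles a boolean query from per-level term lists; the terms shown are illustrative placeholders, not the review's actual search strings.

levels = {
    "health": ["health", "public health", "behavioral health"],
    "public policy": ["policy", "legislation", "regulation", "ordinance"],
    "implementation": ["implement*", "adoption", "fidelity", "compliance"],
    "measurement": ["measure*", "scale", "survey", "instrument"],
}

def or_block(terms):
    # Quote multi-word phrases, OR the terms together, and wrap the level in parentheses.
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

query = " AND ".join(or_block(terms) for terms in levels.values())
print(query)
# (health OR "public health" OR "behavioral health") AND (policy OR ...) AND (implement* OR ...) AND (measure* OR ...)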


Subject(s)
Health Policy , Implementation Science , Attitude of Health Personnel , Guideline Adherence/standards , Humans , Organizational Culture , Practice Guidelines as Topic/standards , Psychometrics
9.
Implement Res Pract ; 1: 2633489520933896, 2020.
Article in English | MEDLINE | ID: mdl-37089124

ABSTRACT

Background: Systematic measure reviews can facilitate advances in implementation research and practice by locating reliable, valid, pragmatic measures; identifying promising measures needing refinement and testing; and highlighting measurement gaps. This review identifies and evaluates the psychometric and pragmatic properties of measures of readiness for implementation and its sub-constructs as delineated in the Consolidated Framework for Implementation Research: leadership engagement, available resources, and access to knowledge and information. Methods: The systematic review methodology is described fully elsewhere. The review, which focused on measures used in mental or behavioral health, proceeded in three phases. Phase I, data collection, involved search string generation, title and abstract screening, full-text review, construct assignment, and "cited-by" searches. Phase II, data extraction, involved coding relevant psychometric and pragmatic information. Phase III, data analysis, involved two trained specialists independently rating each measure using the Psychometric and Pragmatic Evidence Rating Scale (PAPERS). Frequencies and central tendencies summarized information availability and PAPERS ratings. Results: Searches identified 9 measures of readiness for implementation, 24 measures of leadership engagement, 17 measures of available resources, and 6 measures of access to knowledge and information. Information about internal consistency was available for most measures. Information about other psychometric properties was often not available. Ratings for internal consistency were "adequate" or "good." Ratings for other psychometric properties were less than "adequate." Information on pragmatic properties was most often available regarding cost, language readability, and brevity. Information was less often available regarding training burden and interpretation burden. Cost and language readability generally exhibited "good" or "excellent" ratings, interpretation burden generally exhibited "minimal" ratings, and training burden and brevity exhibited mixed ratings across measures. Conclusion: Measures of readiness for implementation and its sub-constructs used in mental health and behavioral health care are unevenly distributed, exhibit unknown or low psychometric quality, and demonstrate mixed pragmatic properties. This review identified a few promising measures, but targeted efforts are needed to systematically develop and test measures that are useful for both research and practice. Plain language abstract: Successful implementation of effective mental health or behavioral health treatments in service delivery settings depends in part on the readiness of the service providers and administrators to implement the treatment; the engagement of organizational leaders in the implementation effort; the resources available to support implementation, such as time, money, space, and training; and the accessibility of knowledge and information among service providers about the treatment and how it works. It is important that the methods for measuring these factors are dependable, accurate, and practical; otherwise, we cannot assess their presence or strength with confidence or know whether efforts to increase their presence or strength have worked.
This systematic review of published studies sought to identify and evaluate the quality of questionnaires (referred to as measures) that assess readiness for implementation, leadership engagement, available resources, and access to knowledge and information. We identified 56 measures of these factors and rated their quality in terms of how dependable, accurate, and practical they are. Our findings indicate there is much work to be done to improve the quality of available measures; we offer several recommendations for doing so.

10.
Implement Res Pract ; 1: 2633489520940022, 2020.
Article in English | MEDLINE | ID: mdl-37089125

ABSTRACT

Background: Despite their influence, outer setting barriers (e.g., policies, financing) are an infrequent focus of implementation research. The objective of this systematic review was to identify and assess the psychometric properties of measures of outer setting used in behavioral and mental health research. Methods: Data collection involved (a) search string generation, (b) title and abstract screening, (c) full-text review, (d) construct mapping, and (e) measure forward searches. Outer setting constructs were defined using the Consolidated Framework for Implementation Research (CFIR). The search strategy included four relevant constructs separately: (a) cosmopolitanism, (b) external policy and incentives, (c) patient needs and resources, and (d) peer pressure. Information was coded using nine psychometric criteria: (a) internal consistency, (b) convergent validity, (c) discriminant validity, (d) known-groups validity, (e) predictive validity, (f) concurrent validity, (g) structural validity, (h) responsiveness, and (i) norms. Frequencies were calculated to summarize the availability of psychometric information. Information quality was rated using a 5-point scale and a final median score was calculated for each measure. Results: Systematic searches yielded 20 measures: four measures of the general outer setting domain, seven of cosmopolitanism, four of external policy and incentives, four of patient needs and resources, and one measure of peer pressure. Most were subscales within full scales assessing implementation context. Typically, scales or subscales did not have any psychometric information available. Where information was available, the quality was most often rated as "1-minimal" or "2-adequate." Conclusion: To our knowledge, this is the first systematic review to focus exclusively on measures of outer setting factors used in behavioral and mental health research and comprehensively assess a range of psychometric criteria. The results highlight the limited quantity and quality of measures at this level. Researchers should not assume "one size fits all" when measuring outer setting constructs. Some outer setting constructs may be more appropriately and efficiently assessed using objective indices or administrative data reflective of the system rather than the individual.

11.
Am J Prev Med ; 57(6 Suppl 1): S13-S24, 2019 12.
Article in English | MEDLINE | ID: mdl-31753276

ABSTRACT

CONTEXT: Health systems increasingly are exploring implementation of standardized social risk assessments. Implementation requires screening tools both with evidence of validity and reliability (psychometric properties) and that are low cost, easy to administer, readable, and brief (pragmatic properties). These properties for social risk assessment tools are not well understood, and understanding them could help guide selection of assessment tools and future research. EVIDENCE ACQUISITION: The systematic review was conducted during 2018 and included literature from PubMed and CINAHL published between 2000 and May 18, 2018. Included studies were based in the U.S. and included tools that addressed at least 2 social risk factors (economic stability, education, social and community context, healthcare access, neighborhood and physical environment, or food) and that were administered in a clinical setting. Manual literature searching was used to identify empirical uses of included screening tools. Data on psychometric and pragmatic properties of each tool were abstracted. EVIDENCE SYNTHESIS: Review of 6,838 unique citations yielded 21 unique screening tools and 60 articles demonstrating empirical uses of the included screening tools. Data on psychometric properties were sparse, and few tools reported use of gold standard measurement development methods. Review of pragmatic properties indicated that tools were generally low cost, written for low-literacy populations, and easy to administer. CONCLUSIONS: Multiple low-cost, low-literacy tools are available for social risk screening in clinical settings, but psychometric data are very limited. More research is needed on clinic-based screening tool reliability and validity, as these factors should influence both adoption and utility. SUPPLEMENT INFORMATION: This article is part of a supplement entitled Identifying and Intervening on Social Needs in Clinical Settings: Evidence and Evidence Gaps, which is sponsored by the Agency for Healthcare Research and Quality of the U.S. Department of Health and Human Services, Kaiser Permanente, and the Robert Wood Johnson Foundation.


Subject(s)
Mass Screening , Psychometrics , Risk Assessment , Social Determinants of Health , Humans , Reproducibility of Results , Surveys and Questionnaires
12.
BMC Health Serv Res ; 18(1): 882, 2018 Nov 22.
Article in English | MEDLINE | ID: mdl-30466422

ABSTRACT

CONTEXT: Implementation science measures are rarely used by stakeholders to inform and enhance clinical program change. Little is known about what makes implementation measures pragmatic (i.e., practical) for use in community settings; thus, the present study's objective was to generate a clinical stakeholder-driven operationalization of a pragmatic measures construct. EVIDENCE ACQUISITION: The pragmatic measures construct was defined using (1) a systematic literature review to identify dimensions of the construct in the PsycINFO and PubMed databases and (2) interviews with an international stakeholder panel (N = 7) who were asked about their perspectives on pragmatic measures. EVIDENCE SYNTHESIS: Combined results from the systematic literature review and stakeholder interviews revealed a final list of 47 short statements (e.g., feasible, low cost, brief) describing pragmatic measures, which will allow for the development of a rigorous, stakeholder-driven conceptualization of the pragmatic measures construct. CONCLUSIONS: Results revealed significant overlap between terms related to the pragmatic construct in the existing literature and stakeholder interviews. However, a number of terms were unique to each methodology. This underscores the importance of understanding stakeholder perspectives on criteria measuring the pragmatic construct. These results will be used to inform future phases of the project, in which stakeholders will determine the relative importance and clarity of each dimension of the pragmatic construct, as well as their priorities for the pragmatic dimensions. Taken together, these results will be incorporated into a pragmatic rating system for existing implementation science measures to support implementation science and practice.


Subject(s)
Feedback , Implementation Science , Communication , Female , Humans , Male , Middle Aged , Research Design
13.
Syst Rev ; 7(1): 66, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29695295

ABSTRACT

BACKGROUND: Implementation science is the study of strategies used to integrate evidence-based practices into real-world settings (Eccles and Mittman, Implement Sci. 1(1):1, 2006). Central to the identification of replicable, feasible, and effective implementation strategies is the ability to assess the impact of contextual constructs and intervention characteristics that may influence implementation, but several measurement issues make this work quite difficult. For instance, it is unclear which constructs have no measures and which measures have any evidence of psychometric properties like reliability and validity. As part of a larger set of studies to advance implementation science measurement (Lewis et al., Implement Sci. 10:102, 2015), we will complete systematic reviews of measures that map onto the Consolidated Framework for Implementation Research (Damschroder et al., Implement Sci. 4:50, 2009) and the Implementation Outcomes Framework (Proctor et al., Adm Policy Ment Health. 38(2):65-76, 2011), the protocol for which is described in this manuscript. METHODS: Our primary databases will be PubMed and Embase. Our search strings will be composed of five levels: (1) the outcome or construct term; (2) terms for measure; (3) terms for evidence-based practice; (4) terms for implementation; and (5) terms for mental health. Two trained research specialists will independently review all titles and abstracts, followed by full-text review for inclusion. The research specialists will then conduct measure-forward searches using the "cited by" function to identify all published empirical studies using each measure. The measure and associated publications will be compiled in a packet for data extraction. Data relevant to our Psychometric and Pragmatic Evidence Rating Scale (PAPERS) will be independently extracted and then rated using a "worst score counts" methodology reflecting "poor" to "excellent" evidence. DISCUSSION: We will build a centralized, accessible, searchable repository through which researchers, practitioners, and other stakeholders can identify psychometrically and pragmatically strong measures of implementation contexts, processes, and outcomes. By facilitating the use of psychometrically and pragmatically strong measures identified through this systematic review, the repository would enhance the cumulativeness, reproducibility, and applicability of research findings in the rapidly growing field of implementation science.
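The protocol's "worst score counts" rating methodology can be sketched in a few lines of Python: when several empirical uses of a measure report evidence for the same psychometric criterion, only the lowest (worst) rating is retained. This reading of the phrase, and the example data, are assumptions made for illustration rather than the protocol's definitive procedure.

from collections import defaultdict

# (criterion, rating) pairs extracted from three hypothetical studies that used the same
# measure; ratings follow the PAPERS -1 ("poor") to 4 ("excellent") scale.
extracted = [
    ("internal_consistency", 3),
    ("internal_consistency", 1),  # a later study reports weaker evidence
    ("structural_validity", 2),
    ("norms", 4),
    ("norms", 2),
]

by_criterion = defaultdict(list)
for criterion, rating in extracted:
    by_criterion[criterion].append(rating)

# Keep only the worst (minimum) rating per criterion.
worst = {criterion: min(scores) for criterion, scores in by_criterion.items()}
print(worst)  # {'internal_consistency': 1, 'structural_validity': 2, 'norms': 2}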


Subject(s)
Evidence-Based Practice , Health Plan Implementation , Systematic Reviews as Topic , Humans , Health Plan Implementation/methods
15.
Implement Sci ; 12(1): 137, 2017 Nov 21.
Article in English | MEDLINE | ID: mdl-29162150

ABSTRACT

BACKGROUND: The recent growth in organized efforts to advance dissemination and implementation (D & I) science suggests a rapidly expanding community focused on the adoption and sustainment of evidence-based practices (EBPs). Although promising for the D & I of EBPs, the proliferation of initiatives is difficult for any one individual to navigate and summarize. Such proliferation may also result in redundant efforts or missed opportunities for participation and advancement. A review of existing D & I science resource initiatives and their unique merits would be a significant step for the field. The present study aimed to describe the global landscape of these organized efforts to advance D & I science. METHODS: We conducted a content analysis between October 2015 and March 2016 to examine resources and characteristics of D & I science resource initiatives using public, web-based information. Included resource initiatives must have engaged in multiple efforts to advance D & I science beyond conferences, offered D & I science resources, and provided content in English. The sampling method included an Internet search using D & I terms and inquiry among internationally representative D & I science experts. Using a coding scheme based on a priori and grounded approaches, two authors consensus coded website information including interactive and non-interactive resources and information regarding accessibility (membership, cost, competitive application, and location). RESULTS: The vast majority (83%) of resource initiatives offered at least one of seven interactive resources (consultation/technical assistance, mentorship, workshops, workgroups, networking, conferences, and social media) and one of six non-interactive resources (resource library, news and updates from the field, archived talks or slides, links pages, grant writing resources, and funding opportunities). Non-interactive resources were most common, with some appearing frequently across resource initiatives (e.g., news and updates from the field). CONCLUSION: Findings generated by this study offer insight into what types of D & I science resources exist and what new resources may have the greatest potential to make a unique and needed contribution to the field. Additional interactive resources may benefit the field, particularly mentorship opportunities and resources that can be accessed virtually. Moving forward, it may be useful to consider strategic attention to the core tenets of D & I science put forth by Glasgow and colleagues to most efficiently and effectively advance the field.


Subject(s)
Evidence-Based Practice/methods , Health Plan Implementation/methods , Information Dissemination/methods , Translational Research, Biomedical/methods , Humans
16.
Implement Sci ; 12(1): 118, 2017 10 03.
Article in English | MEDLINE | ID: mdl-28974248

ABSTRACT

BACKGROUND: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria's clarity and importance. METHODS: Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data. FINDINGS: The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance at the category and individual criterion levels are presented. CONCLUSIONS: This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.
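The concept-mapping analysis in this abstract combines multidimensional scaling and hierarchical cluster analysis of stakeholders' sorts of the 47 criteria. The Python sketch below runs both steps on a tiny, fabricated similarity matrix (the proportion of sorters placing each pair of criteria in the same pile); the criteria names and numbers are illustrative, not the study's data or analysis code.

import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

criteria = ["brief", "low cost", "acceptable", "relevant", "actionable"]

# Proportion of sorters who placed each pair of criteria in the same pile (fabricated).
similarity = np.array([
    [1.0, 0.8, 0.2, 0.1, 0.1],
    [0.8, 1.0, 0.3, 0.1, 0.2],
    [0.2, 0.3, 1.0, 0.6, 0.5],
    [0.1, 0.1, 0.6, 1.0, 0.7],
    [0.1, 0.2, 0.5, 0.7, 1.0],
])
distance = 1.0 - similarity

# Two-dimensional point map of the criteria via multidimensional scaling.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(distance)

# Hierarchical clustering on the same distances, cut into two clusters.
labels = fcluster(linkage(squareform(distance, checks=False), method="average"), t=2, criterion="maxclust")

for name, (x, y), cluster in zip(criteria, coords, labels):
    print(f"{name:12s} cluster={cluster}  ({x:+.2f}, {y:+.2f})")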


Subject(s)
Health Plan Implementation/methods , Health Services Research/methods , Stakeholder Participation , Cluster Analysis , Humans , Interviews as Topic
17.
Implement Sci ; 12(1): 108, 2017 08 29.
Article in English | MEDLINE | ID: mdl-28851459

ABSTRACT

BACKGROUND: Implementation outcome measures are essential for monitoring and evaluating the success of implementation efforts. Yet, currently available measures lack conceptual clarity and have largely unknown reliability and validity. This study developed and psychometrically assessed three new measures: the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM). METHODS: Thirty-six implementation scientists and 27 mental health professionals assigned 31 items to the constructs and rated their confidence in their assignments. The Wilcoxon one-sample signed rank test was used to assess substantive and discriminant content validity. Exploratory and confirmatory factor analysis (EFA and CFA) and Cronbach alphas were used to assess the validity of the conceptual model. Three hundred twenty-six mental health counselors read one of six randomly assigned vignettes depicting a therapist contemplating adopting an evidence-based practice (EBP). Participants used 15 items to rate the therapist's perceptions of the acceptability, appropriateness, and feasibility of adopting the EBP. CFA and Cronbach alphas were used to refine the scales, assess structural validity, and assess reliability. Analysis of variance (ANOVA) was used to assess known-groups validity. Finally, half of the counselors were randomly assigned to receive the same vignette and the other half the opposite vignette; and all were asked to re-rate acceptability, appropriateness, and feasibility. Pearson correlation coefficients were used to assess test-retest reliability and linear regression to assess sensitivity to change. RESULTS: All but five items exhibited substantive and discriminant content validity. A trimmed CFA with five items per construct exhibited acceptable model fit (CFI = 0.98, RMSEA = 0.08) and high factor loadings (0.79 to 0.94). The alphas for 5-item scales were between 0.87 and 0.89. Scale refinement based on measure-specific CFAs and Cronbach alphas using vignette data produced 4-item scales (α's from 0.85 to 0.91). A three-factor CFA exhibited acceptable fit (CFI = 0.96, RMSEA = 0.08) and high factor loadings (0.75 to 0.89), indicating structural validity. ANOVA showed significant main effects, indicating known-groups validity. Test-retest reliability coefficients ranged from 0.73 to 0.88. Regression analysis indicated each measure was sensitive to change in both directions. CONCLUSIONS: The AIM, IAM, and FIM demonstrate promising psychometric properties. Predictive validity assessment is planned.
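Among the psychometric tools this abstract relies on, Cronbach's alpha is easy to show as a worked example. The Python sketch below computes coefficient alpha for a 4-item scale from a small, hypothetical respondent-by-item matrix; it is a generic illustration of the statistic, not the study's data or code.

import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of the summed scale)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses (rows = respondents, columns = items on a 1-5 agreement scale).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))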


Subject(s)
Health Plan Implementation/methods , Health Plan Implementation/statistics & numerical data , Outcome Assessment, Health Care/methods , Outcome Assessment, Health Care/statistics & numerical data , Surveys and Questionnaires , Factor Analysis, Statistical , Feasibility Studies , Female , Humans , Male , Psychometrics , Reproducibility of Results